non boolean custom evals #82
Conversation
Walkthrough

Recent changes enhance the `CustomPrompt` evaluator to support both 'boolean' and 'numeric' output types, add a corresponding `CUSTOM_PROMPT_SCORE` metric type, and update the example notebook to demonstrate the new behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CustomPrompt
    participant LLM
    participant Logger
    participant Metrics
    User->>CustomPrompt: Initialize(output_type='boolean')
    CustomPrompt->>Logger: Log initialization
    User->>CustomPrompt: _system_message()
    CustomPrompt->>Logger: Log system message construction
    CustomPrompt-->>User: Return system message
    User->>CustomPrompt: _evaluate(kwargs)
    CustomPrompt->>Logger: Log evaluation start
    CustomPrompt->>LLM: Run completion with constructed prompt
    LLM-->>CustomPrompt: Return response
    CustomPrompt->>Metrics: Process response based on output type
    CustomPrompt-->>User: Return EvalResult
    CustomPrompt->>Logger: Log evaluation end
```
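The diagram above maps to a user-facing flow along these lines; the snippet below is a minimal sketch under assumed API details (the `eval_prompt` argument and the `run(...)` entry point are assumptions, not confirmed by this PR):

```python
# Minimal, hypothetical sketch of the flow in the diagram above.
# The constructor arguments and run(...) entry point are assumptions.
from athina.evals.llm.custom_prompt.evaluator import CustomPrompt

evaluator = CustomPrompt(
    eval_prompt="Does the response answer the user's question? {response}",
    output_type="boolean",  # the initialization step in the diagram
)
# Internally, _system_message() builds output-type-specific instructions,
# _evaluate() runs the LLM completion, and the response is converted into
# an EvalResult with the appropriate metrics.
result = evaluator.run(response="Paris is the capital of France.")
print(result)
```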
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (2)
- athina/evals/llm/custom_prompt/evaluator.py (5 hunks)
- athina/metrics/metric_type.py (1 hunks)
Additional comments not posted (6)
athina/metrics/metric_type.py (1)

31-31: LGTM! Verify the usage of the new enum value.

The addition of the `CUSTOM_PROMPT_SCORE` enum value looks good. Ensure that it is correctly used in the codebase.

Verification successful

The new enum value `CUSTOM_PROMPT_SCORE` is correctly integrated and used in the codebase:

- `athina/metrics/metric_type.py`: definition of `CUSTOM_PROMPT_SCORE`.
- `athina/evals/llm/custom_prompt/evaluator.py`: usage in `EvalResultMetric`.
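For reference, the new member presumably takes a shape like the following; only the `CUSTOM_PROMPT_SCORE` name is confirmed by this review, and the assigned string value is an assumption:

```python
# Sketch of the MetricType addition. Only the member name is confirmed;
# the assigned string value is an assumption.
from enum import Enum

class MetricType(Enum):
    # ... existing members elided ...
    CUSTOM_PROMPT_SCORE = "custom_prompt_score"
```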
Scripts executed

The following scripts were executed for the analysis:

Script:

```sh
#!/bin/bash
# Description: Verify the usage of the new enum value `CUSTOM_PROMPT_SCORE`.
# Test: Search for the usage of the new enum value. Expect: Correct usage in the codebase.
rg --type python $'CUSTOM_PROMPT_SCORE'
```

Length of output: 69

Script:

```sh
#!/bin/bash
# Description: Verify the usage of the new enum value `CUSTOM_PROMPT_SCORE`.
# Test: Search for the usage of the new enum value. Expect: Correct usage in the codebase.
rg 'CUSTOM_PROMPT_SCORE' --glob '*.py'
```

Length of output: 263

(The first run fails because ripgrep's file-type name for Python is `py`, not `python`; the `--glob '*.py'` retry succeeds.)
athina/evals/llm/custom_prompt/evaluator.py (5)
1-2: LGTM! Approved imports.

The new imports `time`, `logger`, `EvalResult`, `EvalResultMetric`, and `MetricType` are necessary for the new functionality and look good.

Also applies to: 11-12
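A plausible shape for that import block is sketched below; the `MetricType` path matches the reviewed `athina/metrics/metric_type.py`, while the `logger` and result-type module paths are assumptions:

```python
# Sketch of the new import block. The logger and result-type paths are
# assumed for illustration; only the MetricType path is grounded in the PR.
import time

from athina.helpers.logger import logger                           # assumed path
from athina.interfaces.result import EvalResult, EvalResultMetric  # assumed path
from athina.metrics.metric_type import MetricType
```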
20-20: LGTM! Approved attribute addition.

The addition of the `_output_type` attribute is necessary for handling different output types and looks good.
30-30: LGTM! Approved constructor changes.

The initialization of the `_output_type` attribute in the constructor is necessary and looks good.

Also applies to: 45-45
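Taken together, the attribute and constructor changes might look roughly like this (the parameter name and its 'boolean' default are assumptions inferred from the sequence diagram):

```python
# Hypothetical sketch of the attribute and constructor change.
# The parameter name and its default value are assumptions.
from athina.evals.llm.llm_evaluator import LlmEvaluator

class CustomPrompt(LlmEvaluator):
    _output_type: str

    def __init__(self, output_type: str = "boolean", **kwargs):
        # Remember which output type ('boolean' or 'numeric') to evaluate for.
        self._output_type = output_type
        super().__init__(**kwargs)
```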
105-125: LGTM! Approved `_system_message` method.

The `_system_message` method correctly handles 'boolean' and 'numeric' output types by returning appropriate instructions. The default case is handled by calling the superclass method, which looks good.

Verification successful

The `_system_message` method correctly handles 'boolean' and 'numeric' output types by returning appropriate instructions. The default case is handled by calling the superclass method, which has been verified to exist in `LlmEvaluator`:

- `athina/evals/llm/llm_evaluator.py`: `LlmEvaluator` class and `_system_message` method confirmed.
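The branching described here might look roughly like the following method-level sketch (the instruction wording in each branch is invented for illustration; only the branch structure is described in the review):

```python
# Sketch of the output-type branching inside CustomPrompt.
# The instruction strings are invented for illustration.
def _system_message(self) -> str:
    if self._output_type == "boolean":
        # Ask the model for a pass/fail verdict plus an explanation.
        return "Evaluate the response and reply with a boolean result and an explanation."
    elif self._output_type == "numeric":
        # Ask the model for a score plus an explanation.
        return "Evaluate the response and reply with a numeric score and an explanation."
    # Any other output type defers to the superclass default.
    return super()._system_message()
```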
Scripts executed
The following scripts were executed for the analysis:
Script:

```sh
#!/bin/bash
# Description: Verify the superclass method `_system_message`.
# Test: Search for the implementation of `_system_message` in the superclass. Expect: Correct implementation.
ast-grep --lang python --pattern $'class LlmEvaluator { $$$ def _system_message($_) { $$$ } $$$ }'
```

Length of output: 458

Script:

```sh
#!/bin/bash
# Description: Verify the superclass method `_system_message`.
# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
ast-grep --lang python --pattern 'class LlmEvaluator { $$$ }'

# Search for the _system_message method within the LlmEvaluator class
ast-grep --lang python --pattern 'class LlmEvaluator { $$$ def _system_message($_) { $$$ } $$$ }'
```

Length of output: 957

Script:

```sh
#!/bin/bash
# Description: Verify the superclass method `_system_message`.
# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
ast-grep --lang python --pattern 'class LlmEvaluator' --json

# Search for the _system_message method within the LlmEvaluator class
ast-grep --lang python --pattern 'def _system_message' --json
```

Length of output: 822

Script:

```sh
#!/bin/bash
# Description: Verify the superclass method `_system_message`.
# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
rg 'class LlmEvaluator'

# Search for the _system_message method within the codebase
rg 'def _system_message'
```

Length of output: 338

(The brace-delimited patterns are not valid Python syntax, so the `ast-grep` attempts cannot match; the plain `rg` searches provide the actual confirmation.)
127-176: LGTM! Approved `_evaluate` method.

The `_evaluate` method correctly handles different output types and generates evaluation metrics. The method also handles exceptions and logs errors, which looks good.

Verification successful

The `_evaluate` method correctly handles different output types and generates evaluation metrics, and it handles exceptions and logs errors. The verification confirms that exceptions are appropriately managed across the various usages (a sketch of the described flow follows the file list below):

- athina/evals/base_evaluator.py
- athina/evals/guardrails/profanity_free/evaluator.py
- athina/evals/guardrails/unusual_prompt/evaluator.py
- athina/evals/guardrails/sfw/evaluator.py
- athina/evals/guardrails/sensitive_topics/evaluator.py
- athina/evals/guardrails/reading_time/evaluator.py
- athina/evals/guardrails/toxic_language/evaluator.py
- athina/evals/guardrails/restrict_to_topic/evaluator.py
- athina/evals/grounded/grounded_evaluator.py
- athina/evals/llm/summary_accuracy/evaluator.py
- athina/evals/guardrails/no_secrets_present/evaluator.py
- athina/evals/function/function_evaluator.py
- athina/evals/ragas/ragas_evaluator.py
- athina/evals/guardrails/gibberish_text/evaluator.py
- athina/evals/guardrails/detect_pii/evaluator.py
- athina/evals/guardrails/correct_language/evaluator.py
- athina/evals/guardrails/politeness_check/evaluator.py
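Under assumed helper names and JSON keys, the `_evaluate` flow described above might be sketched as follows (method body only; `llm_service.json_completion`, `_user_message`, the response keys, and the `EvalResult`/metric fields other than `CUSTOM_PROMPT_SCORE` are all assumptions):

```python
# Rough, hypothetical sketch of the _evaluate flow. Helper names, response
# keys, and result fields other than CUSTOM_PROMPT_SCORE are assumptions.
# Assumes time, traceback, logger, EvalResult, EvalResultMetric, and
# MetricType are imported as discussed above.
def _evaluate(self, **kwargs) -> EvalResult:
    start_time = time.perf_counter()
    messages = [
        {"role": "system", "content": self._system_message()},
        {"role": "user", "content": self._user_message(**kwargs)},
    ]
    try:
        response = self.llm_service.json_completion(messages=messages)
        if self._output_type == "numeric":
            # Numeric evals surface the model's score as CUSTOM_PROMPT_SCORE.
            value = float(response["score"])
            metrics = [EvalResultMetric(id=MetricType.CUSTOM_PROMPT_SCORE.value, value=value)]
        else:
            # Boolean evals map the verdict onto a pass/fail style metric.
            passed = bool(response["result"])
            metrics = [EvalResultMetric(id=MetricType.PASSED.value, value=float(passed))]
    except Exception as e:
        # Errors are logged rather than silently swallowed, then re-raised.
        logger.error(f"_evaluate failed: {e}")
        traceback.print_exc()
        raise
    runtime_ms = int((time.perf_counter() - start_time) * 1000)
    return EvalResult(name=self.name, runtime=runtime_ms, metrics=metrics)
```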
Scripts executed
The following scripts were executed for the analysis:
Script:

```sh
#!/bin/bash
# Description: Verify the exception handling in the `_evaluate` method.
# Test: Search for the usage of the `_evaluate` method. Expect: Correct exception handling.
rg --type python $'_evaluate'
```

Length of output: 59

Script:

```sh
#!/bin/bash
# Description: Verify the exception handling in the `_evaluate` method.
# Test: Search for the usage of the `_evaluate` method. Expect: Correct exception handling.
rg '_evaluate' --glob '*.py'
```

Length of output: 4653

Script:

```sh
#!/bin/bash
# Description: Extract the context around the `_evaluate` calls to verify exception handling.

# Extract lines around `_evaluate` calls in Python files.
rg '_evaluate' --glob '*.py' -A 3 -B 3
```

Length of output: 24784

Script:

```sh
#!/bin/bash
# Description: Verify exception handling around `_evaluate` calls in identified files.

# Check for exception handling context around `_evaluate` calls.
rg '_evaluate' --glob '*.py' -A 10 -B 10 | grep -E 'try|except|logger.error|traceback'
```

Length of output: 2088
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- athina/evals/llm/custom_prompt/evaluator.py (5 hunks)
- athina/metrics/metric_type.py (1 hunks)
- examples/run_custom_eval.ipynb (11 hunks)
Files skipped from review due to trivial changes (1)
- examples/run_custom_eval.ipynb
Files skipped from review as they are similar to previous changes (2)
- athina/evals/llm/custom_prompt/evaluator.py
- athina/metrics/metric_type.py
LGTM
Summary by CodeRabbit

New Features

- Enhanced the `CustomPrompt` class to handle different output types ('boolean' and 'numeric').

Enhancements

- Added two new metric types (`CUSTOM_PROMPT_SCORE` and `SCORE`) to the `MetricType` class.

Documentation

- Updated `run_custom_eval.ipynb` with new execution counts, response texts, and display scores for better clarity on evaluations. (A hypothetical usage sketch in the spirit of the notebook follows below.)
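As a closing illustration, here is a hypothetical snippet in the spirit of the updated notebook; the constructor arguments and the `run(...)` call are assumptions rather than verbatim notebook code:

```python
# Hypothetical usage in the spirit of run_custom_eval.ipynb.
# Constructor arguments and the run(...) call are assumptions.
from athina.evals.llm.custom_prompt.evaluator import CustomPrompt

numeric_eval = CustomPrompt(
    eval_prompt="Rate how well the response answers the question, from 1 to 5: {response}",
    output_type="numeric",
)
result = numeric_eval.run(response="The capital of France is Paris.")
# With output_type='numeric', the result carries a CUSTOM_PROMPT_SCORE metric
# instead of a boolean pass/fail verdict.
print(result)
```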